Security Stop Press : LLM Malicious “Prompt Injection” Attack Warning

Written by: Paul |

The UK’s National Cyber Security Centre (NCSC) has warned of the susceptibility of existing Large Language Models (LLMs) to malicious “prompt injection” attacks. These are where a user creates inputs intended to cause an AI model to behave in an unintended way e.g., generating offensive content or disclosing confidential information. 

This means that businesses integrating LLMs like ChatGPT into their business, products, or services could be leaving themselves open to risks like inaccurate, controversial, or biased data content, data poisoning and concealed prompt injection attacks. 

The advice is for businesses to establish cybersecurity principles and make sure that they are able to deal with even the worst case scenario of whatever their LLM-powered app is permitted to do.